The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for method development. 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of participants, and only 50% of participants performed ensembling, based on either multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
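As a rough illustration of two of the most frequently reported strategies (patch-based training for oversized samples, and K-fold cross-validation with ensembling of the fold models), the sketch below uses a generic `build_model` callable and illustrative patch sizes; it is not taken from any surveyed solution.

```python
import numpy as np
from sklearn.model_selection import KFold

def random_patches(volume, patch_size=(64, 64, 64), n_patches=8, rng=None):
    """Sample random 3D patches so oversized volumes fit into memory."""
    rng = rng or np.random.default_rng()
    starts = [rng.integers(0, max(s - p, 1), size=n_patches)
              for s, p in zip(volume.shape, patch_size)]
    return [volume[x:x + patch_size[0], y:y + patch_size[1], z:z + patch_size[2]]
            for x, y, z in zip(*starts)]

def cross_validated_ensemble(images, labels, build_model, n_splits=5):
    """Train one model per fold and average the fold models' predictions."""
    models = []
    for train_idx, _ in KFold(n_splits=n_splits, shuffle=True).split(images):
        model = build_model()                      # hypothetical model factory
        model.fit(images[train_idx], labels[train_idx])
        models.append(model)
    # Ensemble of identical architectures: average the per-fold predictions.
    return lambda x: np.mean([m.predict(x) for m in models], axis=0)
```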
The task of response selection in multi-turn dialogue is to find the best option among all candidates. To improve the reasoning ability of the model, previous studies have focused on using explicit algorithms to model the dependencies between utterances, which are deterministic, limited, and inflexible. In addition, few studies consider the differences between the options before and after reasoning. In this paper, we propose an Implicit Relational Reasoning Graph Network to address these issues, which consists of the Utterance Relational Reasoner (URR) and the Option Dual Comparator (ODC). The URR aims to implicitly extract dependencies between utterances, as well as between utterances and options, and to perform reasoning with relational graph convolutional networks. The ODC focuses on perceiving the differences between the options through dual comparison, which can eliminate the interference of noisy options. Experimental results on two multi-turn dialogue reasoning benchmark datasets, MuTual and MuTual+, show that our method significantly improves the baselines of four pretrained language models and achieves state-of-the-art performance. The model surpasses human performance for the first time on the MuTual dataset.
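For intuition, a relational graph convolution over utterance and option nodes could look roughly like the sketch below; this is not the authors' exact URR implementation, and the dimensions and relation types are illustrative.

```python
import torch
import torch.nn as nn

class RelationalGraphConv(nn.Module):
    """Minimal relational graph convolution: one weight matrix per edge type."""
    def __init__(self, in_dim, out_dim, num_relations):
        super().__init__()
        self.rel_weights = nn.ModuleList(
            [nn.Linear(in_dim, out_dim, bias=False) for _ in range(num_relations)])
        self.self_loop = nn.Linear(in_dim, out_dim)

    def forward(self, node_feats, adjs):
        # adjs: (num_relations, N, N) row-normalized adjacencies, e.g. one relation
        # for utterance->utterance edges and one for utterance->option edges.
        out = self.self_loop(node_feats)
        for adj, lin in zip(adjs, self.rel_weights):
            out = out + adj @ lin(node_feats)
        return torch.relu(out)

# Illustrative usage: nodes are utterances and candidate options encoded by a
# pretrained language model.
# feats = torch.randn(12, 768); adjs = torch.rand(2, 12, 12)
# layer = RelationalGraphConv(768, 768, num_relations=2); out = layer(feats, adjs)
```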
Face Restoration (FR) aims to restore High-Quality (HQ) faces from Low-Quality (LQ) input images, which is a domain-specific image restoration problem in the low-level computer vision area. Early face restoration methods mainly used statistical priors and degradation models, which struggle to meet the requirements of real-world applications in practice. In recent years, face restoration has witnessed great progress after stepping into the deep learning era. However, few works have systematically studied deep learning-based face restoration methods. Thus, this paper comprehensively surveys recent advances in deep learning techniques for face restoration. Specifically, we first summarize different problem formulations and analyze the characteristics of face images. Second, we discuss the challenges of face restoration. Concerning these challenges, we present a comprehensive review of existing FR methods, including prior-based methods and deep learning-based methods. Then, we explore the techniques developed for the FR task, covering network architectures, loss functions, and benchmark datasets. We also conduct a systematic benchmark evaluation of representative methods. Finally, we discuss future directions, including network designs, metrics, benchmark datasets, applications, etc. We also provide an open-source repository for all the discussed methods, which is available at https://github.com/TaoWangzj/Awesome-Face-Restoration.
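For context on the problem formulation, the following is a minimal sketch of the widely used synthetic degradation model (blur, downsampling, noise, JPEG compression) that maps an HQ face to an LQ input; the kernel size, scale factor, noise level, and JPEG quality here are illustrative, not those of any particular surveyed method.

```python
import cv2
import numpy as np

def degrade_face(hq, scale=4, sigma=3.0, noise_std=5.0, jpeg_q=40):
    """Classical HQ -> LQ degradation: Gaussian blur, bicubic downsampling,
    additive Gaussian noise, and JPEG compression (hq: uint8 BGR image)."""
    h, w = hq.shape[:2]
    lq = cv2.GaussianBlur(hq, (7, 7), sigma)
    lq = cv2.resize(lq, (w // scale, h // scale), interpolation=cv2.INTER_CUBIC)
    lq = np.clip(lq + np.random.normal(0, noise_std, lq.shape), 0, 255).astype(np.uint8)
    ok, buf = cv2.imencode(".jpg", lq, [int(cv2.IMWRITE_JPEG_QUALITY), jpeg_q])
    return cv2.imdecode(buf, cv2.IMREAD_COLOR)
```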
In this paper, we study the problem of real-world image deblurring and consider two key factors for improving the performance of deep deblurring models, namely training data synthesis and network architecture design. Deblurring models trained on existing synthetic datasets perform poorly on real blurry images due to domain shift. To reduce the domain gap between the synthetic and real domains, we propose a novel realistic blur synthesis pipeline that simulates the camera imaging process. Thanks to the proposed synthesis method, existing deblurring models can be made more robust to real-world blur. In addition, we develop an effective deblurring model that simultaneously captures non-local dependencies and local context in the feature domain. Specifically, we introduce a multi-path transformer module into a UNet architecture for rich multi-scale feature learning. Comprehensive experiments on three real-world datasets show that the proposed deblurring model performs favorably against state-of-the-art methods.
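The multi-path idea can be sketched roughly as below: the same feature map is attended at several downsampling rates and the paths are fused back together. This is a simplified stand-in rather than the authors' exact module; the channel counts, head numbers, and rates are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiPathAttention(nn.Module):
    """Self-attention over the same feature map at several downsampling rates,
    then fused: a simplified stand-in for a multi-path transformer block."""
    def __init__(self, channels, heads=4, rates=(1, 2, 4)):
        super().__init__()
        self.rates = rates
        self.attn = nn.ModuleList(
            [nn.MultiheadAttention(channels, heads, batch_first=True) for _ in rates])
        self.fuse = nn.Conv2d(channels * len(rates), channels, kernel_size=1)

    def forward(self, x):                      # x: (B, C, H, W), H and W divisible by rates
        b, c, h, w = x.shape
        paths = []
        for rate, attn in zip(self.rates, self.attn):
            f = F.avg_pool2d(x, rate) if rate > 1 else x
            tokens = f.flatten(2).transpose(1, 2)          # (B, HW / rate^2, C)
            out, _ = attn(tokens, tokens, tokens)
            out = out.transpose(1, 2).reshape(b, c, h // rate, w // rate)
            paths.append(F.interpolate(out, size=(h, w), mode="bilinear",
                                       align_corners=False))
        return x + self.fuse(torch.cat(paths, dim=1))
```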
Few-shot classification requires deep neural networks to learn generalized representations from only a limited number of training images, which is challenging but important in low-data regimes. Recently, CLIP-based methods have shown promising few-shot performance, benefiting from contrastive language-image pre-training. Building on this, we ask whether large-scale pre-training can alleviate the data deficiency of few-shot learning and help representation learning through its pre-learned knowledge. In this paper, we propose CoMo, a Collaboration of pre-trained Models, which incorporates diverse prior knowledge from various pre-training paradigms for better few-shot learning. Our CoMo includes the language-contrastive knowledge of CLIP, the vision-contrastive knowledge of DINO, and the language-generative knowledge of DALL-E. Specifically, CoMo works in two aspects: few-shot data expansion and diverse knowledge ensemble. First, we generate synthetic images via zero-shot DALL-E to enrich the few-shot training data without any manpower. Second, we introduce a learnable Multi-Knowledge Adapter (MK-Adapter) to adaptively blend the predictions from CLIP and DINO. Through such collaboration, CoMo can fully unleash the potential of the different pre-training methods and unify them for few-shot classification. We conduct extensive experiments on 11 datasets to demonstrate the superiority and generalization ability of our method.
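A rough sketch of a learnable adapter that blends knowledge from two pretrained backbones is given below; it does not reproduce the actual MK-Adapter design, and the dimensions and mixing scheme are illustrative.

```python
import torch
import torch.nn as nn

class MultiKnowledgeAdapter(nn.Module):
    """Learn to mix predictions derived from two pretrained backbones
    (e.g. CLIP image features and DINO features of the few-shot images)."""
    def __init__(self, clip_dim, dino_dim, num_classes):
        super().__init__()
        self.clip_head = nn.Linear(clip_dim, num_classes)
        self.dino_head = nn.Linear(dino_dim, num_classes)
        self.alpha = nn.Parameter(torch.tensor(0.5))   # learnable mixing weight

    def forward(self, clip_feat, dino_feat, zero_shot_logits):
        # zero_shot_logits: similarity of CLIP image features to class text prompts.
        adapted = self.alpha * self.clip_head(clip_feat) \
                  + (1 - self.alpha) * self.dino_head(dino_feat)
        return adapted + zero_shot_logits
```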
This paper introduces Honor of Kings Arena, a reinforcement learning (RL) environment based on Honor of Kings, one of the world's most popular games. Compared with other environments studied in most previous work, ours poses new generalization challenges for competitive reinforcement learning. It is a multi-agent problem with one agent competing against its opponent, and it requires generalization ability, as the agent has diverse targets to control and diverse opponents to compete with. We describe the observation, action, and reward specifications of the Honor of Kings domain and provide a Python-based open-source interface for communicating with the game engine. We provide a variety of tasks over twenty target heroes in Honor of Kings Arena and present initial baseline results for RL-based methods with feasible computing resources. Finally, we showcase the generalization challenges posed by Honor of Kings Arena and possible remedies to these challenges. All of the software, including the environment class, is publicly available at https://github.com/tencent-ailab/hok_env. The documentation is available at https://aiarena.tencent.com/hok/doc/.
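For orientation only, a generic Gym-style interaction loop is sketched below. The actual hok_env API (construction arguments, observation and action formats) should be taken from the linked documentation; `make_hok_env` and the reset/step signature here are placeholders, not the library's real interface.

```python
import numpy as np

def random_rollout(make_hok_env, episodes=3):
    """Placeholder loop assuming a Gym-like reset/step interface that returns
    observations, rewards, and a done flag; see the hok_env docs for the real API."""
    env = make_hok_env()                          # hypothetical environment factory
    for _ in range(episodes):
        obs, done, total = env.reset(), False, 0.0
        while not done:
            action = env.action_space.sample()    # assumed Gym-like action space
            obs, reward, done, info = env.step(action)
            total += float(np.sum(reward))
        print("episode return:", total)
```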
Multi-view data are frequently encountered in data mining applications. Effectively extracting information from multi-view data requires clustering methods specifically designed to cope with data that have multiple views, which is non-trivial and challenging. In this paper, we propose a novel one-step multi-view clustering method by exploiting a dual representation of both the common and view-specific information of the different views. The motivation originates from the rationale that multi-view data contain not only consistent knowledge across views but also unique knowledge within each view. Meanwhile, to make representation learning more specific to the clustering task, a one-step learning framework is proposed to integrate representation learning and clustering partitioning as a whole. Within this framework, representation learning and clustering partitioning mutually benefit each other, which effectively improves the clustering performance. Results of extensive experiments on benchmark multi-view datasets clearly demonstrate the superiority of the proposed method.
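The dual-representation plus one-step-clustering idea could be sketched as follows, under assumed linear encoders and a soft k-means style objective; this is a simplified stand-in, not the authors' formulation.

```python
import torch
import torch.nn as nn

class DualRepresentationMVC(nn.Module):
    """Per-view encoders split each view into a shared ('common') code and a
    view-specific code, trained jointly with a simple clustering objective."""
    def __init__(self, view_dims, common_dim, specific_dim, n_clusters):
        super().__init__()
        self.encoders = nn.ModuleList(
            [nn.Linear(d, common_dim + specific_dim) for d in view_dims])
        self.common_dim = common_dim
        z_dim = common_dim + specific_dim * len(view_dims)
        self.centroids = nn.Parameter(torch.randn(n_clusters, z_dim))

    def forward(self, views):                       # views: list of (B, d_v) tensors
        codes = [enc(v) for enc, v in zip(self.encoders, views)]
        commons = [c[:, :self.common_dim] for c in codes]
        specifics = [c[:, self.common_dim:] for c in codes]
        # Consensus: the common parts of all views should agree.
        align_loss = sum(((c - commons[0]) ** 2).mean() for c in commons[1:])
        z = torch.cat([torch.stack(commons).mean(0)] + specifics, dim=1)
        # One-step clustering: pull each sample towards its nearest centroid.
        cluster_loss = torch.cdist(z, self.centroids).min(dim=1).values.mean()
        return z, align_loss + cluster_loss
```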
Despite their remarkable ability to discriminate among in-distribution samples, deep neural networks perform poorly at detecting out-of-distribution (OOD) data. To address this deficiency, state-of-the-art solutions choose to train deep networks on an auxiliary dataset of outliers. Various training criteria for these auxiliary outliers have been proposed based on heuristic intuitions. However, we find that these intuitively designed outlier training criteria can harm in-distribution learning and eventually lead to inferior performance. To this end, we identify three causes of in-distribution incompatibility: contradictory gradients, false likelihood, and distribution shift. Based on our new understanding, we propose a new OOD detection method by adapting the top-level design of both the deep model and the loss function. Our method achieves in-distribution compatibility by reducing the interference with the probabilistic characteristics of in-distribution features. On several benchmarks, our method not only achieves state-of-the-art OOD detection performance but also improves in-distribution accuracy.
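For reference, the standard auxiliary-outlier training criterion that such methods build on (and that this paper argues can interfere with in-distribution learning) combines the usual cross-entropy on in-distribution data with a term pushing outlier predictions towards the uniform distribution. The sketch below is that baseline criterion, not the proposed method.

```python
import torch
import torch.nn.functional as F

def outlier_exposure_loss(logits_in, labels_in, logits_out, lam=0.5):
    """Baseline auxiliary-outlier criterion: cross-entropy on in-distribution
    samples plus cross-entropy to the uniform distribution on outliers."""
    ce_in = F.cross_entropy(logits_in, labels_in)
    # Mean over batch and classes of -log softmax == CE to the uniform distribution.
    ce_uniform = (-F.log_softmax(logits_out, dim=1)).mean()
    return ce_in + lam * ce_uniform
```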
Two-stage detectors have gained much popularity in 3D object detection. Most two-stage 3D detectors utilize grid points, voxel grids, or sampled keypoints for RoI feature extraction in the second stage. However, such methods are inefficient in handling unevenly distributed and sparse outdoor points. This paper addresses this problem in three aspects. 1) Dynamic point aggregation. We propose patch search to quickly search the points in a local region for each 3D proposal. Farthest voxel sampling is then applied to sample the points evenly. In particular, the voxel size varies with distance to accommodate the uneven distribution of points. 2) RoI-graph pooling. We build local graphs on the sampled points to better model contextual information and mine point relations through iterative message passing. 3) Visual feature augmentation. We introduce a simple yet effective fusion strategy to compensate for sparse LiDAR points with limited semantic cues. Based on these modules, we build our Graph R-CNN as a second stage, which can be applied to existing one-stage detectors to consistently improve detection performance. Extensive experiments show that Graph R-CNN outperforms state-of-the-art 3D detection models by a large margin on both the KITTI and Waymo Open Dataset. We rank first on the KITTI BEV car detection leaderboard. The code will be available at https://github.com/nightmare-n/graphrcnn.
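As a point of reference for the sampling step, plain farthest point sampling can be sketched as below; the paper's farthest voxel sampling with distance-dependent voxel sizes is a variant of this idea and is not reproduced here.

```python
import numpy as np

def farthest_point_sampling(points, k):
    """Greedy farthest-point sampling: repeatedly pick the point farthest from
    the already-selected set, so the k samples cover the region evenly."""
    n = points.shape[0]
    selected = [np.random.randint(n)]
    dist = np.full(n, np.inf)
    for _ in range(k - 1):
        dist = np.minimum(dist, np.linalg.norm(points - points[selected[-1]], axis=1))
        selected.append(int(dist.argmax()))
    return points[selected]
```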
Vision Transformers (ViTs) have attracted much attention as a powerful alternative to Convolutional Neural Networks (CNNs). Recent work has shown that ViTs are also vulnerable to adversarial examples, just like CNNs. To build robust ViTs, an intuitive approach is to apply adversarial training, since it has been shown to be one of the most effective ways of obtaining robust CNNs. However, a major limitation of adversarial training is its heavy computational cost. The self-attention mechanism adopted by ViTs is a computationally intensive operation whose cost grows quadratically with the number of input patches, which makes adversarial training on ViTs even more time-consuming. In this work, we first comprehensively study fast adversarial training on various vision transformers and illustrate the relationship between efficiency and robustness. Then, to accelerate adversarial training on ViTs, we propose an efficient attention-guided adversarial training mechanism. Specifically, relying on the specialty of self-attention, we actively drop certain patch embeddings in each layer with an attention-guided dropping strategy during adversarial training. The slimmed self-attention modules significantly accelerate the adversarial training of ViTs. With only 65% of the fast adversarial training time, we obtain matching results on the challenging ImageNet benchmark.
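The attention-guided dropping idea can be sketched as follows: patch tokens that receive little attention from the class token are removed before the next block, shrinking the quadratic attention cost. This is a simplified illustration rather than the exact per-layer strategy of the paper, and the keep ratio is illustrative.

```python
import torch

def drop_low_attention_patches(patch_tokens, cls_attention, keep_ratio=0.65):
    """Keep only the patch embeddings that receive the most attention from the
    class token; the rest are dropped to slim the self-attention modules.

    patch_tokens:  (B, N, D) patch embeddings of one layer
    cls_attention: (B, N) attention weights from the class token to each patch
    """
    b, n, d = patch_tokens.shape
    k = max(1, int(n * keep_ratio))
    idx = cls_attention.topk(k, dim=1).indices                  # (B, k)
    return torch.gather(patch_tokens, 1, idx.unsqueeze(-1).expand(b, k, d))
```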